Dense retrieval aims to map queries and passages into a low-dimensional vector space for efficient similarity measurement, and has shown promising effectiveness in various large-scale retrieval tasks. Since most existing methods adopt pre-trained Transformers (e.g., BERT) for parameter initialization, some work focuses on proposing new pre-training tasks that compress the useful semantic information of passages into dense vectors, achieving remarkable performance. However, it remains challenging to capture the rich semantic information of passages and the relations among them in dense vectors via a single pre-training task. In this work, we propose a multi-task pre-trained model, MASTER, that unifies and integrates multiple pre-training tasks with different learning objectives under a bottlenecked masked autoencoder architecture. Concretely, MASTER uses a multi-decoder architecture to integrate three types of pre-training tasks: corrupted passage recovery, related passage recovery, and PLM output recovery. By incorporating a shared deep encoder, we construct a representation bottleneck in our architecture, compressing the abundant semantic information across tasks into dense vectors. The first two types of tasks capture the semantic information of passages and the relationships among them within the pre-training corpus. The third captures knowledge beyond the corpus from external PLMs (e.g., GPT-2). Extensive experiments on several large-scale passage retrieval datasets show that our approach outperforms previous state-of-the-art dense retrieval methods. Our code and data are publicly released at https://github.com/microsoft/SimXNS
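As a rough illustration of the bottlenecked, multi-decoder idea described above, the following PyTorch sketch routes a deep shared encoder's [CLS] vector into several shallow task-specific decoders. The hyperparameters, decoder design, and head wiring are assumptions for illustration, not MASTER's exact configuration.

```python
import torch
import torch.nn as nn

class BottleneckedMultiDecoder(nn.Module):
    """Sketch of a bottlenecked masked autoencoder with several shallow decoders.

    A deep shared encoder compresses a passage into a single [CLS] vector
    (the representation bottleneck); each task-specific shallow decoder must
    reconstruct its target conditioned only on that vector.
    """
    def __init__(self, vocab_size=30522, d_model=768, n_tasks=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=12)  # deep shared encoder
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True)
        # one shallow decoder per pre-training task (e.g., corrupted-passage,
        # related-passage, and PLM-output recovery)
        self.decoders = nn.ModuleList(
            [nn.TransformerEncoder(dec_layer, num_layers=1) for _ in range(n_tasks)]
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids, target_ids_per_task):
        hidden = self.encoder(self.embed(input_ids))
        cls_vec = hidden[:, :1, :]                      # bottleneck: the dense passage vector
        logits = []
        for dec, tgt in zip(self.decoders, target_ids_per_task):
            # each decoder sees only the bottleneck vector plus its (masked) target tokens
            dec_in = torch.cat([cls_vec, self.embed(tgt)], dim=1)
            logits.append(self.lm_head(dec(dec_in)))
        return cls_vec.squeeze(1), logits
```

After pre-training, only the shared encoder would be kept to embed queries and passages for retrieval.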
In task-oriented dialogs such as MultiWoZ (Budzianowski et al., 2018), an informative and/or successful system response needs to include necessary key information, such as the phone number of a hotel. We therefore hypothesize that by helping the model focus more on learning key quantities in the dialog, it can generate more informative and helpful responses. In this paper, we propose a new training algorithm, Reinforced Language Modeling (RLM), which uses a fine-grained reward function and reinforcement learning to help the model focus on generating key quantities correctly at test time. Empirical results show that RLM achieves state-of-the-art performance on the inform rate, success rate, and combined score on MultiWoZ.
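To make the notion of a fine-grained reward concrete, here is a toy Python sketch that scores a generated response by how many key quantities from the reference it reproduces. The regex patterns and reward shape are illustrative assumptions, not the paper's actual reward definition.

```python
import re

def key_quantity_reward(generated: str, reference: str) -> float:
    """Toy fine-grained reward: fraction of key quantities (phone numbers,
    postcodes, times, prices) from the reference response that appear
    verbatim in the generated response."""
    patterns = [
        r"\b\d{10,11}\b",                 # phone numbers
        r"\b[a-z]{2}\d{1,2}[a-z]{2}\b",   # toy postcode pattern
        r"\b\d{1,2}:\d{2}\b",             # times
        r"\b\d+\.\d{2}\b",                # prices
    ]
    keys = [m for p in patterns for m in re.findall(p, reference.lower())]
    if not keys:
        return 1.0                        # nothing to get wrong
    hits = sum(1 for k in keys if k in generated.lower())
    return hits / len(keys)

# Such a reward could then weight a policy-gradient term for each sampled
# response, e.g. loss = -(reward - baseline) * log_prob_of_sampled_tokens.
```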
The idea of cooperative perception is to benefit from perception data shared among multiple vehicles and to overcome the limitations of the on-board sensors of a single vehicle. However, fusing multi-vehicle information remains challenging due to localization inaccuracy, limited communication bandwidth, and ambiguous fusion. Past practice simplifies the problem by assuming precise GNSS positioning, manually specifying the number of connected vehicles, and predetermining the fusion strategy. This paper proposes a map-based cooperative perception framework, named Map Container, to improve the accuracy and robustness of cooperative perception and ultimately overcome this problem. The concept "Map Container" means that the map serves as the platform that transforms all information into the map coordinate space and merges different information sources into a distributed fusion architecture. In the proposed Map Container, the matching relationships between GNSS signals, sensor features, and map features are used to optimize the estimation of the environment state. Evaluation results on a simulation dataset and a real-vehicle platform validate the effectiveness of the proposed method.
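As a loose illustration of the "map as a common platform" idea, the sketch below transforms one vehicle's local detections into a shared map frame given its pose. The function and its interface are hypothetical; the actual framework additionally refines poses by matching sensor features against map features rather than trusting GNSS alone.

```python
import numpy as np

def to_map_frame(points_vehicle: np.ndarray, vehicle_pose: np.ndarray) -> np.ndarray:
    """Transform Nx2 detections from a vehicle's local frame into the shared
    map coordinate space, given that vehicle's pose (x, y, yaw) in the map."""
    x, y, yaw = vehicle_pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)],
                  [np.sin(yaw),  np.cos(yaw)]])
    return points_vehicle @ R.T + np.array([x, y])

# Once detections from every connected vehicle are expressed in the same map
# frame, they can be fused (e.g., by clustering or filtering) without pairwise
# vehicle-to-vehicle calibration.
```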
In this paper, we consider distributed optimization problems over $n$ agents, each with a local cost function, that collaboratively minimize the average of the local cost functions over a connected network. To solve the problem, we propose a Distributed Random Reshuffling (D-RR) algorithm that combines the classical Distributed Gradient Descent (DGD) method with Random Reshuffling (RR). We show that D-RR inherits the superiority of RR for both smooth strongly convex and smooth nonconvex objective functions. In particular, for smooth strongly convex objective functions, D-RR achieves an $\mathcal{O}(1/T^2)$ rate of convergence (here, $T$ counts the total number of iterations) in terms of the squared distance between the iterate and the unique minimizer. When the objective function is assumed to be smooth nonconvex with Lipschitz continuous component functions, we show that D-RR drives the squared norm of the gradient to $0$ at a rate of $\mathcal{O}(1/T^{2/3})$. These convergence results match those of centralized RR (up to constant factors).
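A minimal numpy sketch of the D-RR idea, assuming every agent holds the same number of local samples: each epoch, every agent reshuffles its own samples and interleaves DGD-style neighbour averaging with steps along the reshuffled component gradients. The step-size schedule and the exact ordering of mixing and gradient steps are illustrative, not the paper's precise update rule.

```python
import numpy as np

def d_rr(sample_grads, x0, W, epochs=100, lr0=0.1):
    """Sketch of Distributed Random Reshuffling (D-RR).

    sample_grads[i][j](x): gradient of agent i's j-th component function at x
    x0: common initial point (d-dimensional array)
    W:  doubly stochastic mixing matrix of the connected network
    """
    n = len(sample_grads)
    m = len(sample_grads[0])                     # assume equal local sample counts
    x = np.tile(np.asarray(x0, dtype=float), (n, 1))   # one iterate per agent
    for t in range(epochs):
        step = lr0 / (t + 1)                     # diminishing step size (assumption)
        perms = [np.random.permutation(m) for _ in range(n)]  # independent reshuffles
        for j in range(m):
            g = np.stack([sample_grads[i][perms[i][j]](x[i]) for i in range(n)])
            x = W @ x - step * g                 # mix with neighbours, then step on component gradient
    return x.mean(axis=0)
```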
Achieving general-purpose language intelligence is a long-standing goal of natural language processing, in which standard evaluation benchmarks play a fundamental and guiding role. We argue that for general-purpose language intelligence evaluation, the benchmark itself needs to be comprehensive and systematic. To this end, we propose CUGE, a Chinese Language Understanding and Generation Evaluation benchmark with the following features: (1) a hierarchical benchmark framework, in which datasets are principally selected and organized by a language capability-task-dataset hierarchy; (2) a multi-level scoring strategy, in which model performance is reported at different levels based on the hierarchical framework. To facilitate CUGE, we provide a public leaderboard that can be customized to support flexible model evaluation criteria. Evaluation results for representative pre-trained language models indicate ample room for improvement toward general-purpose language intelligence. CUGE is publicly available at cuge.baai.ac.cn.
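The small sketch below shows what a multi-level scoring strategy over a capability-task-dataset hierarchy could look like: dataset scores are averaged into task scores, task scores into capability scores, and capability scores into one overall score. The averaging scheme is an assumption, not necessarily CUGE's exact normalization.

```python
def hierarchical_score(scores, hierarchy):
    """scores:    {dataset_name: normalized score in [0, 1]}
    hierarchy: {capability: {task: [dataset_name, ...]}}"""
    capability_scores = {}
    for capability, tasks in hierarchy.items():
        task_scores = [
            sum(scores[d] for d in datasets) / len(datasets)
            for datasets in tasks.values()
        ]
        capability_scores[capability] = sum(task_scores) / len(task_scores)
    overall = sum(capability_scores.values()) / len(capability_scores)
    return overall, capability_scores
```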
The accelerating development of autonomous driving technology places greater demands on obtaining large amounts of high-quality data. Labeled data representative of the real world is the fuel for training deep learning networks and is critical for improving autonomous driving perception algorithms. In this paper, we introduce PandaSet, the first dataset produced by a complete, high-precision autonomous vehicle sensor kit with a no-cost commercial license. The dataset was collected using one 360° mechanical spinning LiDAR, one forward-facing long-range LiDAR, and six cameras. The dataset contains more than 100 scenes, each 8 seconds long, and provides 28 types of labels for object classification and 37 types of semantic segmentation labels. We provide baselines for LiDAR-only 3D object detection, LiDAR-camera fusion 3D object detection, and LiDAR point cloud segmentation. For more details about PandaSet and the development kit, see https://scale.com/open-datasets/pandaset.
The event camera is a new type of sensor that differs from traditional cameras. Each pixel is triggered asynchronously by events. The triggering event is a change in the brightness falling on the pixel: an event is output if the increase or decrease in brightness exceeds a certain threshold. Compared with traditional cameras, event cameras have the advantages of high dynamic range and no motion blur. Accumulating events into frames and applying traditional SLAM algorithms is a direct and effective approach to event-based SLAM. Different event accumulator settings, such as the slicing method of the event stream, the processing method for motionless scenes, the use of polarity, the decay function, and the event contribution, can lead to quite different accumulation results. We study how to accumulate event frames to achieve better event-based SLAM performance. For experimental verification, the accumulated event frames are fed into a traditional SLAM system to build an event-based SLAM system. Our strategies for setting the event accumulator have been evaluated on public datasets. Experimental results show that, compared with state-of-the-art event-frame-based SLAM algorithms, our method achieves better performance on most sequences. In addition, the proposed method has been tested on a quadrotor UAV to demonstrate its application in real scenarios. The code and results are open-sourced to benefit the research community of event cameras.
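The sketch below shows one possible accumulator configuration of the kind the paper studies, combining a time slice, an exponential decay toward a reference time, and optional polarity weighting. The decay constant, normalization, and the specific combination of settings are illustrative choices.

```python
import numpy as np

def accumulate_events(events, height, width, t_ref, tau=0.03, use_polarity=True):
    """Accumulate a slice of events into a frame usable by a conventional SLAM front end.

    events: iterable of (t, x, y, p) with integer pixel coordinates and p in {-1, +1}
    t_ref:  reference timestamp of the slice; older events are down-weighted
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for t, x, y, p in events:
        weight = np.exp(-abs(t_ref - t) / tau)          # decay function: older events contribute less
        frame[y, x] += (p if use_polarity else 1.0) * weight
    # normalize to an 8-bit image so it can be fed to a standard visual SLAM pipeline
    if frame.max() > frame.min():
        frame = (frame - frame.min()) / (frame.max() - frame.min())
    return (frame * 255).astype(np.uint8)
```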
Federated learning (FL) has become an important machine learning paradigm in which a global model is trained on the private data of distributed clients. However, due to distribution shift, most existing FL algorithms cannot guarantee fair performance across different clients or across different groups of samples. Recent research focuses on achieving fairness among clients, but it ignores fairness for groups formed by sensitive attributes (e.g., gender and/or race), which is important and practical in real applications. To bridge this gap, we formulate the goal of unified group fairness, which is to learn a fair global model with similar performance across different groups. To achieve unified group fairness for arbitrary sensitive attributes, we propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the distribution shift across groups and comes with a theoretical analysis of its convergence rate. Specifically, we treat the performance of the federated global model on each group as an objective and adopt distributionally robust techniques to maximize the performance of the worst-performing group over an uncertainty set defined by group reweighting. We validate the advantages of G-DRFA in experiments, and the results show that G-DRFA outperforms existing fair federated learning algorithms in terms of unified group fairness.
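To illustrate the group-reweighting idea, the following sketch updates group weights with an exponentiated-gradient step so that worse-performing groups receive larger weight in the next round's objective. The update rule, step size, and the absence of a projection onto a specific uncertainty set are assumptions, not G-DRFA's exact procedure.

```python
import numpy as np

def update_group_weights(group_losses, weights, eta=0.1):
    """Distributionally robust reweighting sketch: groups with higher loss get
    exponentially larger weight, steering training toward the worst group."""
    w = np.asarray(weights) * np.exp(eta * np.asarray(group_losses))
    return w / w.sum()

# Possible server-side use per round (sketch):
#   1. clients report per-group losses of the current global model
#   2. weights = update_group_weights(avg_group_losses, weights)
#   3. the next round's aggregate objective weighs each group's loss by these weights
```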
Deep neural networks have shown excellent performance in many data-driven, prediction-oriented applications, sometimes even better than humans. However, their most significant drawback is the lack of interpretability, which makes them less attractive in many real-world scenarios. When they are involved in ethically sensitive or uncertain settings such as criminal judgment, financial analysis, and medical diagnosis, evidence for model predictions (explanations of model knowledge) must be provided to convince humans. Therefore, studying how to interpret model knowledge is essential for both academic research and practical applications.
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet they ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and in addressing the dilemma between classification and localization performance.
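For readers unfamiliar with CAM, the quantity that KD-CI-CAM de-biases, here is a minimal numpy sketch of the vanilla class activation map: a weighted sum of the last convolutional feature maps using the classifier weights of the chosen class. The shapes and the ReLU/normalization choices are illustrative.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: (C, H, W) activations from the last conv layer
    fc_weights:   (num_classes, C) weights of the global-average-pooling classifier"""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0)                 # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                # normalize to [0, 1]
    return cam

# Thresholding this map yields the localization box; the paper's causal
# intervention targets the biased object-context entanglement inside this map
# before the distillation stage.
```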